
    eXplainable AI for trustworthy healthcare applications

    Acknowledging that AI will inevitably become a central element of clinical practice, this thesis investigates the role of eXplainable AI (XAI) techniques in developing trustworthy AI applications in healthcare. The first part of the thesis focuses on the societal, ethical, and legal aspects of the use of AI in healthcare: it compares approaches to AI ethics worldwide and then examines the practical implications of the European ethical and legal guidelines for AI applications in healthcare. The second part explores how XAI techniques can help meet three key requirements identified in the initial analysis: transparency, auditability, and human oversight. The technical transparency requirement is addressed by enabling explanatory techniques to handle common characteristics of healthcare data and by tailoring them to the medical field. To this end, the thesis presents two novel XAI techniques that reach this goal incrementally, first focusing on multi-label predictive algorithms and then tackling sequential data and incorporating domain-specific knowledge in the explanation process. The thesis then analyzes how the developed XAI technique can be leveraged to audit a fictional commercial black-box clinical decision support system (DSS). Finally, it studies whether AI explanations can effectively enable human oversight by examining their impact on the decision-making process of healthcare professionals.
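
    As an illustration of what a post-hoc explanation for a multi-label clinical predictor can look like, the sketch below trains a multi-label classifier on synthetic data and computes per-label permutation importances. It is a generic stand-in, not the XAI techniques developed in the thesis.

```python
# Minimal sketch (not the thesis's technique): post-hoc, per-label feature
# attributions for a multi-label classifier, using synthetic data and
# permutation importance as a stand-in explanation method.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=500, n_features=10,
                                      n_classes=3, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

model = MultiOutputClassifier(RandomForestClassifier(random_state=0))
model.fit(X_train, Y_train)

# One importance profile per predicted label (e.g., per diagnosis code).
for label_idx, estimator in enumerate(model.estimators_):
    result = permutation_importance(estimator, X_test, Y_test[:, label_idx],
                                    n_repeats=10, random_state=0)
    top = np.argsort(result.importances_mean)[::-1][:3]
    print(f"label {label_idx}: most influential features {top.tolist()}")
```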

    Co-design of human-centered, explainable AI for clinical decision support

    eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has so far received limited attention in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage of, and effectively oversee, high-risk AI systems. Following an iterative design approach, we present the first prototyping-testing-redesigning cycle of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data and multi-label classification tasks. We demonstrate its applicability to explaining a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test this prototype with healthcare providers and collect their feedback, with a twofold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system, and second, we gain useful insights into the perceived deficiencies of their interaction with the system, so that we can redesign a better, more human-centered explanation interface.
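
    For concreteness, the sketch below shows one possible shape of the structured explanation payload an explanation user interface might consume, pairing each predicted label with the patient-history items supporting it. The data model and the clinical codes are hypothetical, not the authors' actual interface.

```python
# Hypothetical sketch (not the authors' interface): a structured explanation
# payload a clinical DSS backend might hand to an explanation UI.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class EvidenceItem:
    event_code: str        # e.g., an ICD or ATC code from the patient history
    ontology_concept: str  # human-readable concept the code is linked to
    weight: float          # relative contribution to the prediction

@dataclass
class LabelExplanation:
    label: str
    probability: float
    evidence: List[EvidenceItem] = field(default_factory=list)

explanation = LabelExplanation(
    label="heart failure follow-up",
    probability=0.81,
    evidence=[
        EvidenceItem("I50.9", "Heart failure, unspecified", 0.42),
        EvidenceItem("C03CA01", "Furosemide prescription", 0.27),
    ],
)
print(json.dumps(asdict(explanation), indent=2))
```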

    Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients

    The paper's main contributions are twofold: to demonstrate how the European Union High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI can be applied in practice in the healthcare domain, and to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.
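
    For orientation only, the sketch below shows one way a deep network could map a chest X-ray to per-region severity classes, assuming six lung regions each scored 0-3 as in the Brixia scoring convention. The architecture, weights, and input handling are assumptions, not the deployed system.

```python
# Rough sketch (not the deployed system): a CNN that outputs one severity
# class per lung region, assuming six regions scored 0-3 (Brixia-style).
import torch
import torch.nn as nn
from torchvision.models import resnet18

N_REGIONS, N_CLASSES = 6, 4  # assumed scoring scheme

class RegionalScorer(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # reuse the 512-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(512, N_REGIONS * N_CLASSES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.head(feats).view(-1, N_REGIONS, N_CLASSES)  # logits per region

model = RegionalScorer()
dummy_xray = torch.randn(1, 3, 224, 224)     # placeholder input
logits = model(dummy_xray)
print(logits.argmax(dim=-1))                 # predicted 0-3 score per region
```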

    Assessing the use of mobile phone data to describe recurrent mobility patterns in spatial epidemic models

    The recent availability of large-scale call detail record data has substantially improved our ability to quantify human travel patterns, with broad applications in epidemiology. Notwithstanding a number of successful case studies, previous works have shown that parametrizing infectious disease models with different mobility data sources, such as mobile phone data or census surveys, can generate divergent outcomes. Thus, it remains unclear to what extent epidemic modelling results may vary when different proxies for human movements are used. Here, we systematically compare 658,000 simulated outbreaks generated with a spatially structured epidemic model based on two different human mobility networks: a commuting network of France extracted from mobile phone data and another extracted from a census survey. We compare epidemic patterns originating from all 329 possible outbreak seed locations and identify the structural network properties of the seeding nodes that best predict whether the resulting spatial and temporal epidemic patterns are alike. We find that the similarity of simulated epidemics is significantly correlated with the connectivity, traffic, and population size of the seeding nodes, suggesting that mobile phone data are a more adequate input for infectious disease models when epidemics spread between highly connected and heavily populated locations, such as large urban areas.
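
    A minimal illustration of the kind of spatially structured model involved is sketched below: a discrete-time metapopulation SIR coupled by a row-stochastic commuting matrix, run from one seed location to compare epidemic timing under two mobility proxies. The networks, populations, and parameters are synthetic and illustrative, not the study's.

```python
# Illustrative sketch (not the study's model): discrete-time SIR metapopulation
# coupled by a commuting matrix; compares epidemic arrival times under two
# synthetic mobility proxies.
import numpy as np

def simulate_sir(mobility, populations, seed_node, beta=0.3, gamma=0.1, steps=300):
    n = len(populations)
    S = populations.astype(float)
    I = np.zeros(n); R = np.zeros(n)
    S[seed_node] -= 10; I[seed_node] += 10
    arrival = np.full(n, np.inf)
    for t in range(steps):
        # Force of infection at each location, mixing prevalence via commuting flows.
        prevalence = I / populations
        lam = beta * mobility @ prevalence
        new_inf = np.minimum(S, lam * S)
        new_rec = gamma * I
        S -= new_inf; I += new_inf - new_rec; R += new_rec
        arrival[(I >= 1) & np.isinf(arrival)] = t  # first step with >= 1 infected
    return arrival

rng = np.random.default_rng(0)
pops = rng.integers(10_000, 200_000, size=20)
phone = rng.random((20, 20)); phone /= phone.sum(axis=1, keepdims=True)
census = rng.random((20, 20)); census /= census.sum(axis=1, keepdims=True)

t_phone = simulate_sir(phone, pops, seed_node=0)
t_census = simulate_sir(census, pops, seed_node=0)
print("arrival-time correlation:", np.corrcoef(t_phone, t_census)[0, 1])
```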

    Causality and Explanation in ML: a Lead from the GDPR and an Application to Personal Injury Damages

    Although the distinction between prediction and explanation is well established in the philosophy of science, statistical modeling techniques too often overlook the practical implications of this theoretical divergence. Can predictive and explanatory models be recognized as complements rather than substitutes? We argue that predictive and explanatory modeling need not be seen as in conflict: these two so-far parallel approaches would largely benefit from each other, and their cross-fertilization might be one of the central topics in statistical modeling in the years to come. Most importantly, we show that the need for this convergence is made apparent by the requirements imposed by the EU General Data Protection Regulation (GDPR), and that it is of paramount importance when dealing with legal data. We also show how the demand to meaningfully clarify the logic behind solely automated decision-making processes creates a unique incentive to reconcile two seemingly contradictory scientific paradigms. In addition, by looking at 2,585 Italian cases related to personal injury compensation, we develop a simple application to map the space of judges' decisions and, using state-of-the-art multi-label algorithms, we classify those decisions according to the relevant heads of damages. Drawing causal evidence from the analysis, however, might be dangerous: if we want machines to improve human decisions, we need more robust, generalized, and explainable models.
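
    A toy sketch of the multi-label classification step is given below, using TF-IDF features and one-vs-rest logistic regression on invented case snippets. The texts, labels, and model choice are illustrative; the paper's actual data and algorithms differ.

```python
# Toy sketch (synthetic examples, not the paper's data or models): multi-label
# classification of case texts into heads of damages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "permanent biological damage and loss of earning capacity after crash",
    "temporary disability with medical expenses reimbursed",
    "moral damage awarded to relatives of the victim",
    "biological damage with moral damage for pain and suffering",
]
labels = [
    {"biological", "patrimonial"},
    {"patrimonial"},
    {"moral"},
    {"biological", "moral"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per head of damage

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, Y)

pred = clf.predict(["crash caused biological damage and moral suffering"])
print(mlb.inverse_transform(pred))
```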
